A Kind of Wireless Sensor Network Coverage Optimization Algorithm Based on Genetic PSO
To address the slow convergence and premature convergence of existing coverage schemes based on the standard particle swarm algorithm, this paper proposes a wireless sensor network coverage optimization algorithm based on genetic particle swarm optimization (PSO). Maximum sensor coverage is taken as the objective function; a genetic algorithm with adaptive crossover and mutation factors searches the solution space, while the strong global search ability of PSO widens the search scope so that particles cover the area more efficiently. Together these strengthen the algorithm's optimization ability, improve node coverage, and mitigate premature convergence. Compared with the standard genetic algorithm and a newer quantum genetic algorithm, simulation results show that the coverage rate of this algorithm increases by 2.28% and 0.65% respectively, and convergence speed also improves; the method therefore effectively optimizes wireless sensor network coverage.
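The hybrid described above can be sketched as a grid-sampled coverage objective plus a PSO update with a GA-style mutation. A minimal sketch; the area size, sensing radius, grid step, and all parameter values below are illustrative assumptions, not the paper's settings:

```python
import random

def coverage_ratio(sensors, radius, area=(50.0, 50.0), grid_step=1.0):
    """Fraction of grid points covered by at least one sensor disk.

    `sensors` is a list of (x, y) positions; `radius` is the sensing range.
    This is the kind of coverage objective the paper maximizes (the exact
    grid and area here are assumptions).
    """
    covered = total = 0
    xs = [i * grid_step for i in range(int(area[0] / grid_step) + 1)]
    ys = [j * grid_step for j in range(int(area[1] / grid_step) + 1)]
    for x in xs:
        for y in ys:
            total += 1
            if any((x - sx) ** 2 + (y - sy) ** 2 <= radius ** 2
                   for sx, sy in sensors):
                covered += 1
    return covered / total

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, pm=0.1, rng=random):
    """One PSO velocity/position update, followed by a GA-style mutation that
    perturbs coordinates with probability `pm` to preserve swarm diversity
    (countering premature convergence). Parameter values are illustrative.
    """
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(pos, vel, pbest, gbest):
        v = w * v + c1 * rng.random() * (pb - x) + c2 * rng.random() * (gb - x)
        x = x + v
        if rng.random() < pm:
            x += rng.gauss(0.0, 1.0)  # mutation: random Gaussian perturbation
        new_pos.append(x)
        new_vel.append(v)
    return new_pos, new_vel
```

In a full run, `coverage_ratio` over the flattened sensor coordinates serves as each particle's fitness, and `pso_step` is iterated until coverage stops improving.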
Building Damage, Death and Downtime Risk Attenuation in Earthquakes
Whether for pre-event prevention and preparedness or for post-event response and recovery after a catastrophic earthquake, engineers, owners, and policy makers need estimates of damage, death and downtime (3d) losses. In this research, a quantitative "scenario-based" risk analysis approach was developed to investigate 3d losses for buildings. The "Redbook Building" is taken as a typical New Zealand construction exemplar and analyzed for the 22 February 2011 Christchurch Earthquake. Losses are presented in the form of attenuation curves that also include the associated uncertainties. The spatial distribution of 3d damage over the height of buildings is also considered. It is thus shown that it is possible to discriminate between losses that lead to building replacement and less severe losses that require structures to be repaired.
The 3d loss results show that within Christchurch city (17 km radial distance from the earthquake epicenter): (a) the expected physical damage loss ratio is about 50% of the property value; (b) the expected probability that someone is killed or seriously injured is about 4%; and (c) the expected downtime for the building being out of service is about 24 weeks. However, when various uncertainties are considered, one can have 90% confidence that these losses will be as high as: (a) complete loss (100% physical damage), implying the structure has a high chance of collapse; (b) an 8% probability of fatality, implying deaths and significant injuries are likely; and (c) a 1-year downtime due to post-event reconstruction demand surge. These results demonstrate that even structures such as the "Redbook Building", well designed and constructed to contemporary standards, can be expected to suffer significant damage, and the downtime loss is particularly large. To address this, new building structures should ideally be built stronger, include recentering attributes, and use Damage Avoidance Design (DAD) armoring connection details.
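Confidence bands like the 90% downtime figure above are commonly read off a lognormal loss distribution. A minimal sketch, assuming the 24-week figure is the median and an illustrative dispersion of β = 0.4 (the abstract does not state the distribution or β):

```python
import math
from statistics import NormalDist

def lognormal_percentile(median, beta, p):
    """p-th percentile of a lognormal variable with the given median and
    logarithmic standard deviation (dispersion) beta: median * exp(beta * z_p).
    """
    z = NormalDist().inv_cdf(p)  # standard normal quantile for probability p
    return median * math.exp(beta * z)

# e.g. downtime with a 24-week median and an assumed beta = 0.4:
# lognormal_percentile(24, 0.4, 0.90) gives roughly 40 weeks
```

The same form underlies fragility and attenuation curves: the median shifts with shaking intensity while β captures the uncertainty band.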
LatEval: An Interactive LLMs Evaluation Benchmark with Incomplete Information from Lateral Thinking Puzzles
With the continuous evolution and refinement of LLMs, they are endowed with
impressive logical reasoning or vertical thinking capabilities. But can they
think out of the box? Do they possess proficient lateral thinking abilities?
Following the setup of Lateral Thinking Puzzles, we propose a novel evaluation
benchmark, LatEval, which assesses the model's lateral thinking within an
interactive framework. In our benchmark, we challenge LLMs with 2 aspects: the
quality of questions posed by the model and the model's capability to integrate
information for problem-solving. We find that nearly all LLMs struggle with
employing lateral thinking during interactions. For example, even the most advanced model, GPT-4, shows some ability, yet still falls noticeably short of human performance. This evaluation benchmark provides LLMs with a highly challenging and distinctive task that is crucial for an effective AI assistant.
Comment: Work in progress.
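An interactive session of this kind can be sketched as a question-asking loop graded on the final guess. The loop structure, stub callables, and token-F1 grading below are illustrative assumptions, not LatEval's actual protocol or metrics:

```python
def token_f1(prediction, reference):
    """Token-overlap F1, a simple proxy for grading the final guess."""
    p, r = prediction.lower().split(), reference.lower().split()
    common = sum(min(p.count(t), r.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

def run_session(ask_question, answer_question, final_guess, solution, turns=5):
    """Hypothetical interaction loop: the model asks yes/no questions, a host
    answers from the hidden solution, and the model's final guess is scored.
    """
    history = []
    for _ in range(turns):
        q = ask_question(history)            # model proposes a question
        history.append((q, answer_question(q, solution)))  # host replies
    return token_f1(final_guess(history), solution)
```

In LatEval's terms, the quality of the questions in `history` and the score on the final guess correspond to the two evaluated aspects: question posing and information integration.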
English Broadcast News Speech Recognition by Humans and Machines
With recent advances in deep learning, considerable attention has been given
to achieving automatic speech recognition performance close to human
performance on tasks like conversational telephone speech (CTS) recognition. In
this paper we evaluate the usefulness of these proposed techniques on broadcast
news (BN), a similar challenging task. We also perform a set of recognition
measurements to understand how close the achieved automatic speech recognition
results are to human performance on this task. On two publicly available BN
test sets, DEV04F and RT04, our speech recognition system using LSTM and
residual network based acoustic models with a combination of n-gram and neural
network language models performs at 6.5% and 5.9% word error rate. By achieving
new performance milestones on these test sets, our experiments show that
techniques developed on other related tasks, like CTS, can be transferred to
achieve similar performance. In contrast, the best measured human recognition
performance on these test sets is much lower, at 3.6% and 2.8% respectively,
indicating that there is still room for new techniques and improvements in this
space, to reach human performance levels.
Comment: © 2019 IEEE.
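The word error rates quoted above follow the standard definition: word-level edit distance divided by reference length. A minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with the standard dynamic-programming edit distance over words.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[-1][-1] / len(ref)
```

Production scoring pipelines additionally normalize text (casing, punctuation, alternate spellings) before alignment, which this sketch omits.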
Improved Antifouling Properties of Polyamide Nanofiltration Membranes by Reducing the Density of Surface Carboxyl Groups
Carboxyls are inherent functional groups of thin-film composite polyamide nanofiltration (NF) membranes and may play a role in membrane performance and fouling. Their surface presence is attributed to incomplete reaction of acyl chloride monomers during synthesis of the membrane active layer by interfacial polymerization. To unravel the effect of carboxyl group density on organic fouling, NF membranes were fabricated by reacting piperazine (PIP) with either isophthaloyl chloride (IPC) or the more commonly used trimesoyl chloride (TMC). Fouling experiments were conducted with alginate as a model hydrophilic organic foulant in a solution simulating the composition of municipal secondary effluent. The IPC membrane showed improved antifouling properties, exhibiting lower flux decline (40%) and significantly greater fouling reversibility, or cleaning efficiency (74%), than the TMC membrane (51% flux decline and 40% cleaning efficiency). Surface characterization revealed a substantial difference in the density of surface carboxyl groups between the IPC and TMC membranes, while other surface properties were comparable. The role of carboxyl groups was elucidated by atomic force microscopy measurements of foulant-surface intermolecular forces, which showed lower adhesion forces and rupture distances for the IPC membrane than for the TMC membrane in the presence of calcium ions in solution. Our results demonstrate that a decrease in the surface carboxyl group density of polyamide membranes fabricated with IPC monomers can prevent calcium bridging with alginate and thus improve membrane antifouling properties.
Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking
Entity Linking (EL) is a fundamental task for Information Extraction and
Knowledge Graphs. The general form of EL (i.e., end-to-end EL) aims to first
find mentions in the given input document and then link the mentions to
corresponding entities in a specific knowledge base. Recently, the paradigm of
retriever-reader promotes the progress of end-to-end EL, benefiting from the
advantages of dense entity retrieval and machine reading comprehension.
However, existing studies train the retriever and the reader separately in a pipeline manner, ignoring the benefit that interaction between the two components can bring to the task. To help the retriever-reader paradigm perform better on end-to-end EL, we
propose BEER, a Bidirectional End-to-End training framework for Retriever
and Reader. Through our designed bidirectional end-to-end training, BEER
guides the retriever and the reader to learn from each other, make progress
together, and ultimately improve EL performance. Extensive experiments on
benchmarks of multiple domains demonstrate the effectiveness of our proposed
BEER.
Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
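The retrieval half of a retriever-reader EL pipeline can be sketched as inner-product ranking of candidate entity embeddings against a mention embedding. The function and toy data below are illustrative, not BEER's implementation:

```python
def retrieve_entities(mention_vec, entity_vecs, k=2):
    """Dense retrieval step: rank candidate entities by inner product
    between the mention embedding and each entity embedding, return top-k.
    """
    scores = []
    for name, vec in entity_vecs.items():
        score = sum(m * e for m, e in zip(mention_vec, vec))
        scores.append((score, name))
    scores.sort(reverse=True)  # highest inner product first
    return [name for _, name in scores[:k]]
```

A reader model then consumes the top-k candidates together with the document context to pick the final entity (and, in end-to-end EL, the mention span); bidirectional training couples the two stages so each is optimized with feedback from the other.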